
    Asymptotic Analysis of Inpainting via Universal Shearlet Systems

    Recently introduced inpainting algorithms combining applied harmonic analysis and compressed sensing have turned out to be very successful. One key ingredient is a carefully chosen representation system which provides (optimally) sparse approximations of the original image. Due to the common assumption that images are typically governed by anisotropic features, directional representation systems have often been utilized. One prominent example of this class are shearlets, which have the additional benefit of allowing faithful implementations. Numerical results show that shearlets significantly outperform wavelets in inpainting tasks. One of those software packages, www.shearlab.org, even offers the flexibility of using a different parameter for each scale, which is not yet covered by shearlet theory. In this paper, we first introduce universal shearlet systems which are associated with an arbitrary scaling sequence, thereby modeling the previously mentioned flexibility. In addition, this novel construction allows for a smooth transition between wavelets and shearlets and therefore enables us to analyze them in a uniform fashion. For a large class of such scaling sequences, we first prove that the associated universal shearlet systems form band-limited Parseval frames for $L^2(\mathbb{R}^2)$ consisting of Schwartz functions. Second, we analyze the inpainting performance of this class of universal shearlet systems within a distributional model situation, using an $\ell^1$-analysis minimization algorithm for reconstruction. Our main result in this part states that, provided the scaling sequence is comparable to the size of the (scale-dependent) gap, nearly-perfect inpainting is achieved at sufficiently fine scales.
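    To make the reconstruction step concrete, the following Python sketch performs $\ell^1$-analysis inpainting with a stand-in analysis operator (2D finite differences) in place of a true universal shearlet frame, which would require a package such as ShearLab; the image size, mask geometry, and operator are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import cvxpy as cp

# Sketch of l1-analysis inpainting: fill a masked gap in an image by
# minimizing ||W x||_1 subject to agreement with the known pixels.
# W below is a simple 2D finite-difference operator, a stand-in for
# the universal shearlet frames analyzed in the paper.

n = 32                                    # toy image side length

def diff_ops(n):
    D = np.eye(n, k=1) - np.eye(n)        # 1D forward differences
    Dx = np.kron(np.eye(n), D)            # horizontal differences
    Dy = np.kron(D, np.eye(n))            # vertical differences
    return np.vstack([Dx, Dy])

W = diff_ops(n)

# Cartoon-like image (one straight edge) and a mask cutting a vertical gap
# inside the flat region.
x_true = np.zeros((n, n)); x_true[:, : n // 2] = 1.0
mask = np.ones((n, n), dtype=bool); mask[:, 5:9] = False

known = np.flatnonzero(mask.ravel())
x = cp.Variable(n * n)
prob = cp.Problem(cp.Minimize(cp.norm1(W @ x)),
                  [x[known] == x_true.ravel()[known]])
prob.solve()
print("max inpainting error:", np.abs(x.value - x_true.ravel()).max())
```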

    Robust 1-Bit Compressed Sensing via Hinge Loss Minimization

    This work theoretically studies the problem of estimating a structured high-dimensional signal $x_0 \in \mathbb{R}^n$ from noisy 1-bit Gaussian measurements. Our recovery approach is based on a simple convex program which uses the hinge loss function as data fidelity term. While such a risk minimization strategy is very natural for learning binary output models, such as in classification, its capacity to estimate a specific signal vector is largely unexplored. A major difficulty is that the hinge loss is just piecewise linear, so that its "curvature energy" is concentrated in a single point. This is substantially different from other popular loss functions considered in signal estimation, e.g., the square or logistic loss, which are at least locally strongly convex. It is therefore somewhat unexpected that we can still prove very similar types of recovery guarantees for the hinge loss estimator, even in the presence of strong noise. More specifically, our non-asymptotic error bounds show that stable and robust reconstruction of $x_0$ can be achieved with the optimal oversampling rate $O(m^{-1/2})$ in terms of the number of measurements $m$. Moreover, we permit a wide class of structural assumptions on the ground truth signal, in the sense that $x_0$ can belong to an arbitrary bounded convex set $K \subset \mathbb{R}^n$. The proofs of our main results rely on some recent advances in statistical learning theory due to Mendelson. In particular, we invoke an adapted version of Mendelson's small ball method that allows us to establish a quadratic lower bound on the error of the first-order Taylor approximation of the empirical hinge loss function.
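    A minimal numerical sketch of the estimator follows, assuming Gaussian measurements, noiseless sign observations, and a scaled $\ell^1$-ball as the convex constraint set $K$; all sizes and the constraint radius are toy choices, not the paper's general setting.

```python
import numpy as np
import cvxpy as cp

# Sketch of the hinge loss estimator for 1-bit compressed sensing:
# minimize the empirical hinge loss over a bounded convex set K
# (here a scaled l1-ball acting as a sparsity prior).

rng = np.random.default_rng(1)
n, m, s = 100, 500, 5

x0 = np.zeros(n); x0[:s] = 1.0
x0 /= np.linalg.norm(x0)                       # sparse unit-norm ground truth
A = rng.standard_normal((m, n))                # Gaussian measurement vectors
y = np.sign(A @ x0)                            # 1-bit observations

x = cp.Variable(n)
hinge = cp.sum(cp.pos(1 - cp.multiply(y, A @ x))) / m
prob = cp.Problem(cp.Minimize(hinge), [cp.norm1(x) <= np.sqrt(s)])
prob.solve()

# 1-bit measurements lose the scale of x0, so compare directions only.
x_hat = x.value / np.linalg.norm(x.value)
print("direction error:", np.linalg.norm(x_hat - x0))
```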

    $\ell^1$-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?

    This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for the $\ell^1$-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements explicitly depend on the Gram matrix of the analysis operator and therefore particularly account for its mutual coherence structure. Our findings defy conventional wisdom, which promotes the sparsity of the analysis coefficients as the crucial quantity to study. In fact, this common paradigm breaks down completely in many situations of practical interest, for instance, when applying a redundant (multilevel) frame as analysis prior. By extensive numerical experiments, we demonstrate that, in contrast, our theoretical sampling-rate bounds reliably capture the recovery capability of various examples, such as redundant Haar wavelet systems, total variation, or random frames. The proofs of our main results build upon recent achievements in the convex geometry of data mining problems. More precisely, we establish a sophisticated upper bound on the conic Gaussian mean width that is associated with the underlying $\ell^1$-analysis polytope. Due to a novel localization argument, it turns out that the presented framework naturally extends to stable recovery, allowing us to incorporate compressible coefficient sequences as well.
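    The following sketch sets up the $\ell^1$-analysis basis pursuit for a toy cosparse signal. The random analysis operator, the null-space construction of the signal, and all dimensions are illustrative assumptions; the paper's bounds apply to general operators via their Gram matrix.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

# Sketch of l1-analysis basis pursuit:
#   minimize ||Omega x||_1  subject to  ||A x - y||_2 <= eta.
# x0 is built to be cosparse: many rows of Omega annihilate it.

rng = np.random.default_rng(2)
n, m, p, eta = 80, 60, 120, 1e-3

Omega = rng.standard_normal((p, n)) / np.sqrt(p)   # illustrative analysis operator
rows = rng.choice(p, n - 1, replace=False)         # rows that annihilate x0
x0 = null_space(Omega[rows]).ravel()               # generically a 1-dim null space
x0 /= np.linalg.norm(x0)

A = rng.standard_normal((m, n)) / np.sqrt(m)       # (sub-)Gaussian measurements
y = A @ x0 + eta * rng.standard_normal(m) / np.sqrt(m)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(Omega @ x)),
                  [cp.norm2(A @ x - y) <= eta])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```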

    Generic Error Bounds for the Generalized Lasso with Sub-Exponential Data

    This work performs a non-asymptotic analysis of the generalized Lasso under the assumption of sub-exponential data. Our main results continue recent research on the benchmark case of (sub-)Gaussian sample distributions and thereby explore which conclusions remain valid beyond it. While many statistical features of the generalized Lasso remain unaffected (e.g., consistency), the key difference manifests itself in how the complexity of the hypothesis set is measured. It turns out that the estimation error can be controlled by means of two complexity parameters that arise naturally from a generic-chaining-based proof strategy. The output model can be non-realizable, while the only requirement for the input vector is a generic concentration inequality of Bernstein type, which can be verified for a variety of sub-exponential distributions. This abstract approach allows us to reproduce, unify, and extend previously known guarantees for the generalized Lasso. In particular, we present applications to semi-parametric output models and phase retrieval via the lifted Lasso. Moreover, our findings are discussed in the context of sparse recovery and high-dimensional estimation problems.
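    As a small illustration, here is a constrained generalized Lasso with sub-exponential (Laplace) inputs and a non-realizable single-index output model; the nonlinearity, noise level, and constraint radius are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import cvxpy as cp

# Sketch of the constrained generalized Lasso: least squares over an
# l1-ball hypothesis set, fed a nonlinear (non-realizable) output model
# y_i = f(<a_i, x0>) + noise with sub-exponential input vectors a_i.

rng = np.random.default_rng(3)
n, m, s = 100, 400, 5

x0 = np.zeros(n); x0[:s] = 1 / np.sqrt(s)            # sparse ground truth
A = rng.laplace(scale=1 / np.sqrt(2), size=(m, n))   # sub-exponential, unit variance
f = np.tanh                                          # unknown nonlinearity (toy choice)
y = f(A @ x0) + 0.05 * rng.standard_normal(m)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - y) / m),
                  [cp.norm1(x) <= np.sqrt(s)])       # hypothesis set K
prob.solve()

# For single-index models the Lasso estimates x0 up to an unknown scalar,
# so compare directions only.
x_hat = x.value / np.linalg.norm(x.value)
print("direction error:", np.linalg.norm(x_hat - x0 / np.linalg.norm(x0)))
```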

    Compressed Sensing with 1D Total Variation: Breaking Sample Complexity Barriers via Non-Uniform Recovery (iTWIST'20)

    This paper investigates total variation minimization in one spatial dimension for the recovery of gradient-sparse signals from undersampled Gaussian measurements. Recently established bounds for the required sampling rate state that uniform recovery of all $s$-gradient-sparse signals in $\mathbb{R}^n$ is only possible with $m \gtrsim \sqrt{sn} \cdot \mathrm{PolyLog}(n)$ measurements. Such a condition is especially prohibitive for high-dimensional problems, where $s$ is much smaller than $n$. However, previous empirical findings seem to indicate that the latter sampling rate does not reflect the typical behavior of total variation minimization. Indeed, this work provides a rigorous analysis that breaks the $\sqrt{sn}$-bottleneck for a large class of natural signals. The main result shows that non-uniform recovery succeeds with high probability for $m \gtrsim s \cdot \mathrm{PolyLog}(n)$ measurements if the jump discontinuities of the signal vector are sufficiently well separated. In particular, this guarantee allows for signals arising from a discretization of piecewise constant functions defined on an interval. The present paper serves as a short summary of the main results in our recent work [arXiv:2001.09952].
    Comment: in Proceedings of iTWIST'20, Paper-ID: 32, Nantes, France, December 2-4, 2020.
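    A numerical sketch of the non-uniform recovery phenomenon, assuming toy dimensions and a heuristic choice of $m$ well below $n$, in the spirit of $m \gtrsim s \cdot \mathrm{PolyLog}(n)$ (the constants are not those of the paper):

```python
import numpy as np
import cvxpy as cp

# Sketch of 1D total variation minimization: recover a piecewise constant
# signal with s well-separated jumps from m << n Gaussian measurements.

rng = np.random.default_rng(4)
n, s = 200, 3
m = 60                                    # roughly s * polylog(n), well below n

# Piecewise constant signal whose jump positions are well separated.
x0 = np.zeros(n)
for j, t in enumerate(np.linspace(n // 4, 3 * n // 4, s, dtype=int)):
    x0[t:] += (-1) ** j

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.tv(x)),  # tv(x) = sum_i |x[i+1] - x[i]|
                  [A @ x == y])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```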

    Sparse Proteomics Analysis - A compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data

    Background: High-throughput proteomics techniques, such as mass spectrometry (MS)-based approaches, produce very high-dimensional data sets. In a clinical setting, one is often interested in how mass spectra differ between patients of different classes, for example spectra from healthy patients vs. spectra from patients having a particular disease. Machine learning algorithms are needed to (a) identify these discriminating features and (b) classify unknown spectra based on this feature set. Since the acquired data is usually noisy, the algorithms should be robust against noise and outliers, while the identified feature set should be as small as possible. Results: We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry data sets. We show (1) how our method performs on artificial and real-world data sets, (2) that its performance is competitive with standard (and widely used) algorithms for analyzing proteomics data, and (3) that it is robust against random and systematic noise. We further demonstrate the applicability of our algorithm to two previously published clinical data sets.
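    To convey the flavor of the approach, here is a hypothetical mini-pipeline: sparse feature selection on synthetic "spectra" via $\ell^1$-regularized least squares, followed by keeping the largest-weight features. This sketches the general compressed-sensing idea only, not the published SPA algorithm; all data and parameters are made up.

```python
import numpy as np
import cvxpy as cp

# Sketch of compressed-sensing-style feature selection for labeled spectra:
# fit a sparse weight vector w to labels y in {-1,+1} with an l1 penalty,
# then select the spectral bins with the largest weights.

rng = np.random.default_rng(5)
m, n, k = 60, 500, 4                           # samples, spectral bins, true markers

support = rng.choice(n, k, replace=False)      # discriminating m/z positions
X = rng.standard_normal((m, n))                # baseline toy "spectra"
y = np.sign(rng.standard_normal(m))            # class labels +-1
X[:, support] += 2.0 * y[:, None]              # class-dependent peak shifts

w = cp.Variable(n)
lam = 0.1                                      # illustrative regularization weight
prob = cp.Problem(cp.Minimize(cp.sum_squares(X @ w - y) / m + lam * cp.norm1(w)))
prob.solve()

selected = np.argsort(-np.abs(w.value))[:k]    # minimal feature set
print("true markers:    ", np.sort(support))
print("selected markers:", np.sort(selected))
```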